
A Non-Symbolic Theory of Conscious Content: Imagery and Activity

Nigel J.T. Thomas


Source: http://www.imagery-imagination.com/nonsym.htm

Until a few years ago, Cognitive Science was firmly wedded to the notion that cognition must be explained in terms of the computational manipulation of internal representations or symbols. Although many people still believe this, the consensus is no longer solid. Whether it is truly threatened by connectionism is, perhaps, controversial, but there are yet more radical approaches that explicitly reject it. Advocates of "embodied" or "situated" approaches to cognition (e.g., Smith, 1991; Varela et al., 1991; Clancey, 1997) argue that thought cannot be understood as entirely internal. Furthermore, it is argued that autonomous robots can be designed to behave more intelligently if representationalist programming techniques are avoided (Brooks, 1991), and that the way our brains control our behavior is better understood in terms of chaos and dynamical systems theory rather than as any sort of computation (e.g., Freeman & Skarda, 1990; Van Gelder & Port, 1995; Van Gelder, 1995; Garson, 1996).

It is controversial whether these approaches to cognition can really be understood coherently without somehow making appeal to the notion of computation over representations, but that is not the question I want to take up here. I am concerned, instead, with how theories of this type can address the nature of our subjective experience of thinking, as compared with more traditional (symbolic) cognitive theories. At first sight, the newer approaches may seem to measure up poorly. Whereas traditional theorists have always had a lot to say about mental representations and mental processing, non-representational roboticists seem to have little concern with such things, being much more interested in achieving systems capable of autonomous and intelligent behavior, regardless of what goes on inside them to achieve this. One criticism of the application of dynamical systems theory to cognition has been that, through its rejection of mental representation, it effectively abnegates the study of the mind, and heralds a return to the aridities of Behaviorism (Eliasmith, 1996).

However, the relationship between symbolicism and the explanation of subjectivity is itself quite complex. On the one hand, one of the main sources of the paradigm was the Carnegie-Mellon school of Artificial Intelligence research. This work relied, in large part, on the technique of protocol analysis (Newell & Simon, 1972; Ericsson & Simon, 1980). Typically, a subject would be given a puzzle to solve and would talk through the solution process out loud, into a tape recorder. The protocols thus collected would be analyzed to determine the terms and steps of a successful solution strategy, and a computer would then be programmed to solve similar types of problems using a formally similar strategy. The terms consciously used by the human subject became the symbolic tokens manipulated by the program. It was clearly intended that the computational symbols should be taken as modeling the conscious contents in the human solver's mind, and that actual mental contents in humans were to be understood as computational symbols.
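The move from verbal protocol to program can be pictured with a toy sketch (in Python rather than the languages Newell & Simon actually used, and with an invented micro-domain): a reported step such as "the jug is full and I need it empty, so I pour it out" becomes a production rule whose tokens mirror the subject's own terms, and a simple interpreter applies matching rules until the goal is reached.

```python
# Hypothetical illustration of protocol analysis: each rule's tokens are
# lifted directly from the (imagined) subject's verbalized solution steps.
RULES = [
    # (goal, current state, operator named by the subject, resulting state)
    ("jug-empty", "jug-full", "pour-out", "jug-empty"),
    ("jug-full", "jug-empty", "fill-jug", "jug-full"),
]

def solve(state, goal):
    """Apply the first rule whose conditions match, one 'protocol step' at a time."""
    steps = []
    while state != goal:
        for g, cond, op, result in RULES:
            if g == goal and cond == state:
                steps.append(op)
                state = result
                break
        else:
            return None  # no rule matches: the protocol yielded no strategy
    return steps

print(solve("jug-full", "jug-empty"))  # ['pour-out']
```

The point of the exercise is the one made above: the symbols the program manipulates ("jug-full", "pour-out") are not arbitrary internal codes but transcriptions of the terms the human solver consciously used.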

Reinforcing this trend was the development and adoption of LISP as the workhorse language of AI. No doubt LISP has many relevant virtues, but amongst them, surely, is the fact that its superficial syntax is excellently adapted to representing natural language sentences and their parsing. A first stab at representing a sentence involves little more than enclosing it in parentheses, and its syntax can plausibly be tackled by complexifying the list structure in just the sorts of ways that LISP provides for. Again, the implicit message, if not necessarily always the explicit claim, was that the English words, phrases and sentences that we consciously hear, or prepare to speak, or say silently to ourselves when we think, are directly equivalent to the LISP atoms and structured lists of an AI program.
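A minimal sketch of the point, using Python lists as a stand-in for LISP's parenthesized lists (the sentence and hand-built parse are my own illustrative examples, not drawn from any particular AI program): the "first stab" is just the sentence as a flat list of word atoms, and a parse elaborates that list into nested structure of exactly the kind LISP provides for.

```python
# The "first stab": enclosing the sentence in parentheses amounts to
# treating it as a flat list of atoms.
sentence = "the cat sat on the mat"
atoms = sentence.split()  # ['the', 'cat', 'sat', 'on', 'the', 'mat']

# A hand-built, illustrative parse as a nested list, mirroring LISP's
# structured lists: (S (NP the cat) (VP sat (PP on (NP the mat))))
parsed = ["S",
          ["NP", "the", "cat"],
          ["VP", "sat",
           ["PP", "on", ["NP", "the", "mat"]]]]

def leaves(tree):
    """Recover the word atoms from the nested list, skipping category labels."""
    if isinstance(tree, str):
        return [tree]
    out = []
    for node in tree[1:]:  # tree[0] is the category label (the list's 'car')
        out.extend(leaves(node))
    return out

print(leaves(parsed))  # ['the', 'cat', 'sat', 'on', 'the', 'mat']
```

The structural elaboration changes nothing about the atoms themselves, which is just the implicit claim described above: the words we consciously hear or say to ourselves are treated as directly equivalent to the atoms of the program's lists.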

In the 1960s and 70s, psychological thinking was ripe for a reaction against the Behaviorist paradigm that denied the reality (or, at least, the scientific significance) of subjective thought, and this sort of Artificial Intelligence work seemed to hold out the hope of being able to take the mind seriously once again without abandoning scientific rigor. The case for the psychological and philosophical significance of AI was argued forcefully and influentially on just these grounds by Boden (1977), who presented it as providing a solid scientific foundation for a more 'humanistic' sort of psychology.

 
